Risk versus Uncertainty in Deep Learning: Bayes, Bootstrap and the Dangers of Dropout

Author

  • Ian Osband
Abstract

The “Big Data” revolution is spawning systems designed to make decisions from data. In particular, deep learning methods have emerged as the state of the art in many important breakthroughs [18, 20, 28]. This is due to the statistical flexibility and computational scalability of large, deep neural networks, which allows them to harness the information in large and rich datasets. At the same time, elementary decision theory shows that the only admissible decision rules are Bayesian [5, 30]. Colloquially, this means that any decision rule which is not Bayesian can be strictly improved upon (or even exploited) by some Bayesian alternative [6]. The implication of these results is clear: combine deep learning with Bayesian inference for the best decisions from data.
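The distinction in the title between risk and uncertainty is easiest to see with a concrete estimator. Below is a minimal sketch, not the paper's exact algorithm, of a bootstrapped ensemble of small regression networks in which disagreement across ensemble members serves as a proxy for epistemic uncertainty; the architecture, hyperparameters, and toy data are illustrative assumptions.

```python
# Illustrative sketch: bootstrapped ensemble of small regression networks.
# Disagreement across ensemble members is read as a proxy for epistemic
# uncertainty; architecture and hyperparameters are arbitrary choices.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(1, 50), nn.ReLU(), nn.Linear(50, 1))

def train_bootstrap_ensemble(x, y, n_members=10, epochs=200):
    ensemble = []
    n = x.shape[0]
    for _ in range(n_members):
        idx = torch.randint(0, n, (n,))  # bootstrap resample with replacement
        net = make_net()
        opt = torch.optim.Adam(net.parameters(), lr=1e-2)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(x[idx]), y[idx])
            loss.backward()
            opt.step()
        ensemble.append(net)
    return ensemble

def predict_with_uncertainty(ensemble, x_test):
    with torch.no_grad():
        preds = torch.stack([net(x_test) for net in ensemble])  # (members, n, 1)
    return preds.mean(dim=0), preds.std(dim=0)  # mean prediction, ensemble spread

# Toy usage: the ensemble spread should grow away from the training data.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)
ensemble = train_bootstrap_ensemble(x, y)
mean, std = predict_with_uncertainty(ensemble, torch.linspace(-3, 3, 50).unsqueeze(1))
```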

Similar Resources

Bayes By Backprop Neural Networks for Dialogue Management

In dialogue management for statistical spoken dialogue systems, an agent learns a policy that maps a belief state to an action for the system to perform. Efficient exploration is key to successful dialogue policy estimation. Current deep reinforcement learning methods are very promising but rely on ε-greedy exploration, which is not as sample efficient as methods that use uncertainty estimates,...
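For context on the Bayes-by-Backprop idea named in the title, here is a hedged sketch: each weight gets a learned Gaussian posterior sampled via the reparameterization trick, and a KL penalty toward a Gaussian prior is added to the loss. Layer sizes, the prior scale, and the KL weighting are assumptions for illustration, not this paper's settings.

```python
# Illustrative sketch of a Bayes-by-Backprop linear layer: weights are sampled
# from a learned Gaussian posterior on every forward pass, and the KL divergence
# to a fixed Gaussian prior is added to the training loss.
import torch
import torch.nn as nn

class BayesLinear(nn.Module):
    def __init__(self, n_in, n_out, prior_std=1.0):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(n_out, n_in))
        self.rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # softplus(rho) = std
        self.prior_std = prior_std

    def forward(self, x):
        std = torch.nn.functional.softplus(self.rho)
        eps = torch.randn_like(std)
        w = self.mu + std * eps  # reparameterized weight sample
        # KL( N(mu, std) || N(0, prior_std) ), summed over all weights
        self.kl = (torch.log(self.prior_std / std)
                   + (std ** 2 + self.mu ** 2) / (2 * self.prior_std ** 2)
                   - 0.5).sum()
        return torch.nn.functional.linear(x, w)

# Training uses the ELBO: data fit plus the layer's KL term, typically
# down-weighted by the number of minibatches per epoch (1e-3 is a placeholder).
layer = BayesLinear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = torch.nn.functional.mse_loss(layer(x), y) + 1e-3 * layer.kl
loss.backward()
```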

Safe Visual Navigation via Deep Learning and Novelty Detection

Robots that use learned perceptual models in the real world must be able to safely handle cases where they are forced to make decisions in scenarios that are unlike any of their training examples. However, state-of-the-art deep learning methods are known to produce erratic or unsafe predictions when faced with novel inputs. Furthermore, recent ensemble, bootstrap and dropout methods for quantif...
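One widely used way to flag such novel inputs, sketched here under assumptions rather than as this paper's exact method, is to train an autoencoder on in-distribution data and treat high reconstruction error as novelty; the architecture, data, and threshold below are placeholders.

```python
# Illustrative sketch: reconstruction-error novelty detection.
# An autoencoder is fit to in-distribution data; at test time, inputs whose
# reconstruction error exceeds a calibrated threshold are flagged as novel.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, dim=784, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.dec(self.enc(x))

def novelty_score(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)  # per-example reconstruction error

model = AutoEncoder()
train_x = torch.rand(1000, 784)  # stand-in for in-distribution training images
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = ((model(train_x) - train_x) ** 2).mean()
    loss.backward()
    opt.step()

# Calibrate the threshold on in-distribution data, e.g. the 99th percentile.
threshold = novelty_score(model, train_x).quantile(0.99)
is_novel = novelty_score(model, torch.rand(5, 784)) > threshold
```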

Altitude Training: Strong Bounds for Single-Layer Dropout

Dropout training, originally designed for deep neural networks, has been successful on high-dimensional single-layer natural language tasks. This paper proposes a theoretical explanation for this phenomenon: we show that, under a generative Poisson topic model with long documents, dropout training improves the exponent in the generalization bound for empirical risk minimization. Dropout achieve...
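Operationally, "single-layer dropout training" amounts to multiplying the input features of a linear model by fresh Bernoulli masks at each step. The sketch below shows that mechanism for a logistic regression on count features; the toy data, dropout rate, and dimensions are placeholders and the generalization bound itself is not reproduced here.

```python
# Illustrative sketch: dropout training for a single-layer (logistic regression)
# model on bag-of-words-style count features. Each step applies a fresh
# Bernoulli mask, scaled so the expected input is unchanged (inverted dropout).
import torch

def train_dropout_logreg(x, y, p_drop=0.5, epochs=200, lr=0.1):
    n, d = x.shape
    w = torch.zeros(d, requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(epochs):
        mask = (torch.rand_like(x) > p_drop).float() / (1 - p_drop)
        logits = (x * mask) @ w
        loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w

# Toy usage with random "document" counts and labels.
x = torch.randint(0, 3, (100, 50)).float()
y = (x[:, 0] > 0).float()
w = train_dropout_logreg(x, y)
```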

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

Deep learning has gained tremendous attention in applied machine learning. However, such tools for regression and classification do not capture model uncertainty. Bayesian models offer a mathematically grounded framework to reason about model uncertainty, but usually come with a prohibitive computational cost. We show that dropout in neural networks (NNs) can be cast as a Bayesian approximation....
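The practical recipe associated with this line of work is Monte Carlo dropout: keep dropout stochastic at test time and aggregate several forward passes into a predictive mean and spread. The sketch below assumes an arbitrary small network and sample count, and is a simplification rather than the paper's full derivation.

```python
# Illustrative sketch of Monte Carlo dropout: dropout stays active at test time
# and the spread over repeated stochastic forward passes is read as an
# approximate measure of model uncertainty.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 50), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(50, 1))

def mc_dropout_predict(net, x, n_samples=100):
    net.train()  # keep dropout masks stochastic even when predicting
    with torch.no_grad():
        samples = torch.stack([net(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

x_test = torch.linspace(-3, 3, 20).unsqueeze(1)
mean, std = mc_dropout_predict(net, x_test)
```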

Uncertainty Estimates for Efficient Neural Network-based Dialogue Policy Optimisation

In statistical dialogue management, the dialogue manager learns a policy that maps a belief state to an action for the system to perform. Efficient exploration is key to successful policy optimisation. Current deep reinforcement learning methods are very promising but rely on ε-greedy exploration, thus subjecting the user to a random choice of action during learning. Alternative approaches such...
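To make the contrast with ε-greedy concrete, here is a hedged sketch of two action-selection rules over estimated Q-values: uniform-random exploration with probability ε versus Thompson-style sampling from an ensemble of Q-estimates. The ensemble stands in for whatever uncertainty estimate the dialogue manager maintains; all shapes and numbers are placeholders.

```python
# Illustrative sketch: epsilon-greedy action selection versus a simple
# uncertainty-driven rule (Thompson-style sampling from an ensemble of
# Q-value estimates, e.g. obtained by bootstrapping).
import torch

def epsilon_greedy(q_values, epsilon=0.1):
    if torch.rand(()) < epsilon:
        return torch.randint(0, q_values.shape[0], ()).item()  # random action
    return q_values.argmax().item()

def thompson_sample(q_ensemble):
    # q_ensemble: (n_members, n_actions); act greedily w.r.t. one sampled member
    member = torch.randint(0, q_ensemble.shape[0], ())
    return q_ensemble[member].argmax().item()

q_ensemble = torch.randn(10, 4)  # 10 ensemble members, 4 candidate actions
action_eps = epsilon_greedy(q_ensemble.mean(dim=0))
action_ts = thompson_sample(q_ensemble)
```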

Publication year: 2016